Abstract:
As is known, the term virtual studio usually refers to the integration of synthetic and real environments: if an action occurs in the real world, it must be reproduced in the artificial scene. The integration may be realized using one of the available systems, which are generally characterized by sophisticated and expensive technology. Our work proposes an approach with two important advantages: low cost and support for collaborative work. The system consists of a site named SM (generator of Synthetic worlds Meshing between real and virtual worlds) and a separately located site denoted V. The SM system is a powerful graphics-oriented machine, able both to render a complex virtual world with high realism in real time and to mesh the virtual scene with the video signal received from the V system. We assume the V system comprises a uniform background and a subject captured by a webcam, whose video frames are sent to the SM system. To obtain accurate information about the position of each video camera in the real-world coordinate system and the zoom parameters, we propose a simple approach based on detecting the shape variations of a flag of known aspect and dimensions, placed at a defined position against the uniform background. Thus, in any given frame, the scene modifications are encoded in a few parameters related to the flag variations, which makes the integration between real and virtual straightforward. The meshed results are sent to V, while only the selected meshed image is available to a generic user connected to the net service. The system may be applied in different contexts, for example video conferences and multiplayer virtual sets.
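The core of the flag-based idea above is the pinhole relation: a marker of known physical size appears smaller as the camera moves away, so its measured pixel width encodes the camera-to-flag distance. A minimal sketch, with illustrative numbers and function names that are our assumptions rather than the paper's API:

```python
# Minimal pinhole-camera sketch: from the flag's known physical width and its
# measured width in pixels, the camera-to-flag distance follows from similar
# triangles (distance = f * real_width / pixel_width). Illustrative only.

def distance_from_flag(flag_width_m: float, flag_width_px: float,
                       focal_length_px: float) -> float:
    """Estimate camera-to-flag distance via similar triangles."""
    if flag_width_px <= 0:
        raise ValueError("flag must be visible (positive pixel width)")
    return focal_length_px * flag_width_m / flag_width_px

# Example: a 0.5 m flag imaged 100 px wide by a camera with f = 800 px lies
# 4 m away; if the flag shrinks to 50 px, the camera has moved back to 8 m.
print(distance_from_flag(0.5, 100, 800))  # 4.0
print(distance_from_flag(0.5, 50, 800))   # 8.0
```

In the paper's setting the full shape variation of the flag (not just its width) would also constrain orientation and zoom; this sketch covers only the distance component.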
Abstract:
Automatically discovering and recognizing the main structured visual pattern of an image is a challenging problem. The main difficulties are finding the component objects and recognizing the interactions among them. The component objects of the structured visual pattern have a consistent 3D spatial co-occurrence layout across images, which manifests itself as a predictable pattern called a visual composite. In this paper, we propose a visual composite recognition model that automatically discovers and recognizes the visual composite of an image. Our model first learns 3D spatial co-occurrence statistics among objects to discover the potential structured visual pattern of an image, thereby capturing the component objects of the visual composite. Second, we construct a feedforward architecture using the proposed factored three-way interaction machine to recognize the visual composite, which casts recognition as a structured prediction task: the visual composite is predicted by maximizing the probability of the correct structured label given the component objects and their 3D spatial context. Experiments conducted on a six-class sports dataset and a phrasal recognition dataset, respectively, demonstrate the encouraging performance of our model in discovery precision and recognition accuracy compared with competing approaches.
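The structured-prediction step can be pictured as scoring each candidate composite label by combining per-object evidence with pairwise co-occurrence statistics and taking the argmax. A toy sketch, where the score tables are made-up illustrations, not the paper's learned parameters:

```python
# Toy structured prediction: pick the composite label maximizing a score that
# sums per-object (unary) evidence and pairwise co-occurrence terms. The
# tables below are invented for illustration.
from itertools import combinations

# Hypothetical unary scores: how strongly each object supports each composite.
UNARY = {
    "croquet": {"mallet": 2.0, "ball": 1.0, "wicket": 1.5},
    "tennis":  {"racket": 2.0, "ball": 1.0, "net": 1.5},
}
# Hypothetical pairwise co-occurrence scores per composite.
PAIRWISE = {
    "croquet": {frozenset({"mallet", "ball"}): 1.0},
    "tennis":  {frozenset({"racket", "ball"}): 1.0},
}

def predict_composite(objects):
    """Return the composite label with the highest total score."""
    def score(label):
        s = sum(UNARY[label].get(o, 0.0) for o in objects)
        s += sum(PAIRWISE[label].get(frozenset(p), 0.0)
                 for p in combinations(objects, 2))
        return s
    return max(UNARY, key=score)

print(predict_composite(["mallet", "ball", "wicket"]))  # croquet
```

The paper's factored three-way interaction machine replaces these hand-set tables with learned factored parameters over objects, 3D spatial context, and labels; the argmax-over-structured-labels shape is the same.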
Abstract:
Moving in unknown environments requires real-time processing of sensor information, e.g. for map reconstruction and navigation. In the case of stereo image processing, results have to be available in time even when computational costs are high. This paper addresses the problem of real-time image processing for motion planning and distance estimation based on stereo camera images. Ensuring hard real-time behavior with complex algorithms on modern computer architectures is difficult, because algorithms with data-dependent runtime have unpredictable execution times. To solve this problem, a scheduling scheme based on task pairs can be used to avoid an unpredicted state after a deadline violation. A new two-level scheduler, implemented as an extension of RTAI/Linux, is introduced. This scheduling scheme allows the use of AnyTime algorithms, which can provide results even if the task has not finished. A SURF algorithm used for stereo image processing was modified to match the AnyTime concept: first results are made available early and are improved over time. In case of a missed deadline, the image-processing task does not fail, because results have already been provided. Design considerations, implementation, and results are presented.
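The AnyTime pattern described above can be sketched in a few lines: the task publishes a usable coarse result immediately and refines it until the deadline, so a deadline miss never leaves the scheduler without output. Here an iterative square-root refinement stands in for the modified SURF stage; the deadline value is illustrative:

```python
# AnyTime sketch: always keep a best-so-far result; refine until the deadline,
# then return whatever has been computed. Iterative sqrt is a stand-in for the
# paper's modified SURF stage.
import time

def anytime_sqrt(x: float, deadline_s: float) -> float:
    """Newton iteration with a best-so-far result available at any moment."""
    start = time.monotonic()
    result = x if x >= 1 else 1.0   # coarse first result, available immediately
    while time.monotonic() - start < deadline_s:
        better = 0.5 * (result + x / result)  # one refinement step
        if abs(better - result) < 1e-12:      # converged before the deadline
            return better
        result = better                       # publish the improved result
    return result  # deadline hit: return the best result so far

print(round(anytime_sqrt(2.0, 0.01), 6))  # 1.414214
```

In the real system the "publish" step would write the intermediate feature set to a buffer the hard real-time task pair can read at its deadline.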
Abstract:
Image fusion and its separation are frequently arising issues in the image-processing field. In this paper, we describe image fusion and its separation using a scatter graphical method and the joint probability density function. Fused-image separation using the scatter graphical method depends on the joint probability density function of the fused image. This technique gives better results than other techniques in terms of signal-to-interference ratio and peak signal-to-noise ratio.
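The separation problem implied above can be sketched as un-mixing: two fused images that are linear mixtures of two sources are separated by inverting the 2x2 mixing matrix. In the scatter graphical view the mixing slopes are read off the scatter plot of the fused images; here we assume, for illustration, that they are already known:

```python
# Sketch of linear fused-image separation: given f1 = a11*s1 + a12*s2 and
# f2 = a21*s1 + a22*s2, recover s1 and s2 by inverting the mixing matrix.
# Images are flattened to 1-D pixel lists; weights are illustrative.

def separate(f1, f2, a11, a12, a21, a22):
    """Invert a 2x2 linear mixing of two source images."""
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("mixing matrix is singular: sources not separable")
    s1 = [( a22 * x - a12 * y) / det for x, y in zip(f1, f2)]
    s2 = [(-a21 * x + a11 * y) / det for x, y in zip(f1, f2)]
    return s1, s2

# Two toy 4-pixel sources mixed with weights (0.7, 0.3) and (0.4, 0.6):
s1, s2 = [10, 20, 30, 40], [5, 5, 15, 25]
f1 = [0.7 * a + 0.3 * b for a, b in zip(s1, s2)]
f2 = [0.4 * a + 0.6 * b for a, b in zip(s1, s2)]
r1, r2 = separate(f1, f2, 0.7, 0.3, 0.4, 0.6)
print([round(v, 6) for v in r1])  # [10.0, 20.0, 30.0, 40.0]
```

The paper's contribution is estimating the mixing from the joint PDF/scatter of the fused images themselves; this sketch shows only the inversion step once those coefficients are known.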
Abstract:
We propose a methodology for real-time image-based tracking on streaming 4D ultrasound data, using image registration to deduce the position of each ultrasound frame in a global coordinate system. Our method provides an alternative to the external tracking devices traditionally used for tracking probe movements. We compare the performance of our method against magnetic tracking on phantom and liver data, and show that our method provides results in agreement with magnetic tracking.
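The positioning step above amounts to transform chaining: each pairwise registration gives the motion between consecutive frames, and composing these places every frame in one global coordinate system. A sketch with 2-D homogeneous matrices standing in for the 4-D ultrasound case, using illustrative translations:

```python
# Sketch of registration-based tracking: compose frame-to-frame rigid
# transforms into frame-to-global transforms. 2-D homogeneous (3x3) matrices
# stand in for the volumetric case.
import math

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rigid(theta, tx, ty):
    """Homogeneous 2-D rigid transform: rotation by theta, then translation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

# Illustrative pairwise registrations: each frame moves 1 unit in x.
pairwise = [rigid(0.0, 1.0, 0.0) for _ in range(3)]

# Compose into frame-to-global transforms; frame 0 defines the global system.
global_T = [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]]
for step in pairwise:
    global_T.append(mat_mul(global_T[-1], step))

print(global_T[3][0][2])  # frame 3 sits 3.0 units from the origin
```

A known caveat of this approach, and one reason for validating against magnetic tracking, is that registration errors accumulate along the chain (drift).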
Abstract:
Many airborne imaging systems contain two or more sensors, but they typically only allow the operator to view the output of one sensor at a time. The sensors often contain complementary information that could benefit the operator, hence the need for image fusion. Previous papers by these authors have described the techniques available for image alignment and image fusion. This paper discusses the implementation of a real-time image alignment and fusion system in a police helicopter. The need for image fusion and the requirement for fusion systems to pre-align images are reviewed, followed by the techniques implemented for image alignment and fusion. The hardware installed in the helicopter and the system architecture are described, as well as the particular difficulties of installing a 'black box' image fusion system alongside existing sensors, and the methods used for field-of-view matching and image alignment. The paper concludes with an illustration of the performance of the image fusion system, along with feedback from the police operators who use the equipment.
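The two pre-fusion stages named above, field-of-view matching and pixel-level blending, can be sketched on 1-D "scanlines". Real systems work in 2-D with sub-pixel warping; the FOV values and blend weight here are illustrative assumptions:

```python
# Sketch of (1) field-of-view matching, by centre-cropping the wider sensor
# to the narrower sensor's angular coverage, and (2) fusion by pixel-wise
# weighted averaging of the aligned samples. 1-D scanlines for brevity.

def match_fov(wide_line, fov_wide_deg, fov_narrow_deg):
    """Centre-crop the wide-FOV scanline to cover only the narrow FOV."""
    n = len(wide_line)
    keep = round(n * fov_narrow_deg / fov_wide_deg)
    start = (n - keep) // 2
    return wide_line[start:start + keep]

def fuse(line_a, line_b, w=0.5):
    """Pixel-wise weighted average of two aligned scanlines."""
    return [w * a + (1 - w) * b for a, b in zip(line_a, line_b)]

thermal = [10, 10, 80, 80, 10, 10, 10, 10]  # wide-FOV sensor (say 40 deg)
tv = [20, 90, 90, 20]                        # narrow-FOV sensor (say 20 deg)
aligned = match_fov(thermal, 40, 20)         # central 4 pixels of the wide line
print(fuse(aligned, tv))                     # [50.0, 85.0, 50.0, 15.0]
```

A simple average is only one fusion rule; operational systems often prefer feature-selective schemes so that hot thermal targets are not washed out by the visible channel.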
Abstract:
Due to the lack of high-power sources, along with strong electromagnetic absorption by water vapor at frequencies between ~100 GHz and ~10 THz, there are very few radar systems, or any other systems for that matter, operating in this region of the spectrum. For this reason, it is sometimes referred to as the terahertz gap. Source technology, however, is improving, facilitating radar systems operating in this new frontier of the electromagnetic spectrum. At the lower end of this spectral region, near the millimeter/submillimeter transition, components are more readily available and atmospheric attenuation is moderate in comparison to higher frequencies. Utilizing components that can generate on the order of 50 mW of power, a real aperture radar for imaging surfaces up to several hundred meters away has been developed. By transmitting a vertically oriented fan beam to scan the field of view (FOV) in azimuth, and receiving at two vertically displaced locations with identical fan beams forming an interferometer, three-dimensional images of the surface topography (in range, azimuth, and height) can be generated. This paper describes the design of the prototype system and presents initial results, expanding on prior work [1].
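The height dimension above comes from the interferometer: the two vertically displaced receivers see slightly different ranges to the same surface point, and that difference encodes the elevation angle. A minimal far-field sketch, with illustrative numbers (baseline, ranges) that are our assumptions, not the prototype's parameters:

```python
# Far-field interferometric height sketch: for vertical baseline B and
# two-receiver range difference dr, sin(theta) ~ dr / B, and the target
# height above boresight is range * sin(theta). Numbers are illustrative.

def target_height(range_m: float, delta_range_m: float,
                  baseline_m: float) -> float:
    """Estimate target height from the interferometric range difference."""
    sin_theta = delta_range_m / baseline_m
    if not -1.0 <= sin_theta <= 1.0:
        raise ValueError("range difference exceeds baseline: no valid angle")
    return range_m * sin_theta

# A point 200 m away with a 5 mm range difference over a 0.1 m baseline:
# sin(theta) = 0.05, so it sits about 10 m above boresight.
print(round(target_height(200.0, 0.005, 0.1), 9))  # 10.0
```

In practice the range difference is measured as a carrier phase difference, so it is only known modulo a wavelength and must be unwrapped; this sketch assumes that step is already done.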